Logical reasoning over text is an important ability that requires understanding the information present in the text, its interconnections, and then reasoning through them to infer new conclusions. Prior work on improving the logical reasoning ability of language models requires complex processing of training data (e.g., aligning symbolic knowledge to text), yielding task-specific data-augmentation solutions that restrict the learning of general logical reasoning skills. In this work, we propose APOLLO, an adaptively pretrained language model with improved logical reasoning abilities. We select a subset of Wikipedia, based on a set of logical inference keywords, for continued pretraining of a language model. We use two self-supervised loss functions: a modified masked language modeling loss in which only words of specific parts of speech, which likely require more reasoning than basic language understanding, are masked, and a sentence-level classification loss that teaches the model to distinguish between entailment and contradiction sentences. The proposed training paradigm is both simple and independent of task formats. We demonstrate the effectiveness of APOLLO by comparing it with prior baselines on two logical reasoning datasets. APOLLO performs comparably on ReClor and outperforms baselines on LogiQA.
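As a concrete illustration of the selective masking objective, the sketch below masks only tokens from a hand-picked set of part-of-speech tags, assuming spaCy for tagging. The tag set, masking rate, and mask token are illustrative assumptions rather than APOLLO's exact configuration.

```python
import random
import spacy

# Hypothetical set of "reasoning-heavy" part-of-speech tags; the paper's actual
# choice of tags and masking rate may differ.
nlp = spacy.load("en_core_web_sm")
REASONING_POS = {"VERB", "ADJ", "ADV", "SCONJ"}

def selective_mask(text, mask_token="[MASK]", mask_prob=0.15):
    """Mask only words whose POS tag is in REASONING_POS."""
    doc = nlp(text)
    out = []
    for token in doc:
        if token.pos_ in REASONING_POS and random.random() < mask_prob:
            out.append(mask_token)
        else:
            out.append(token.text)
    return " ".join(out)

print(selective_mask("The experiment failed because the sensor was miscalibrated."))
```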
Knowledge-intensive tasks, such as open-domain question answering (QA), require access to a large amount of world or domain knowledge. A common approach to knowledge-intensive tasks is to employ a retrieve-then-read pipeline, which first retrieves a handful of relevant contextual documents from an external corpus such as Wikipedia and then predicts an answer conditioned on the retrieved documents. In this paper, we present a novel perspective for solving knowledge-intensive tasks by replacing the document retriever with a large language model generator. We call our method generate-then-read (GenRead): it first prompts a large language model to generate contextual documents based on a given question, and then reads the generated documents to produce the final answer. Furthermore, we propose a clustering-based prompting method that selects distinct prompts, resulting in generated documents that cover different perspectives and thus better recall of acceptable answers. We conduct extensive experiments on three different knowledge-intensive tasks, including open-domain QA, fact checking, and dialogue systems. Notably, GenRead achieves exact match scores of 71.6 and 54.4 on TriviaQA and WebQ, significantly outperforming the state-of-the-art retrieve-then-read pipeline DPR-FiD by +4.0 and +3.9, without retrieving any documents from any external knowledge source. Finally, we show that model performance can be further improved by combining retrieval and generation.
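A minimal sketch of the generate-then-read flow is given below. The prompt wording is paraphrased for illustration, and `call_llm` is a placeholder for whatever LLM API is available; the paper's clustering-based prompt selection is only noted in a comment, not implemented.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for an actual large language model call."""
    raise NotImplementedError("plug in your preferred LLM API here")

def generate_then_read(question: str, num_docs: int = 3) -> str:
    # Step 1: generate contextual documents instead of retrieving them.
    # (The paper's clustering-based prompting picks diverse in-context prompts so
    # that the generated documents cover different perspectives; here we simply
    # sample the same prompt several times.)
    docs = [
        call_llm(f"Generate a background document to answer the question: {question}")
        for _ in range(num_docs)
    ]
    # Step 2: read the generated documents and produce the final answer.
    context = "\n\n".join(docs)
    return call_llm(
        f"Refer to the passages below and answer the question.\n\n"
        f"Passages:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
```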
Transformers have been shown to be able to perform deductive reasoning on a logical rulebase containing rules and statements written in natural English. While this progress is promising, it is currently unclear whether these models indeed perform logical reasoning by understanding the underlying logical semantics of the language. To this end, we propose RobustLR, a suite of evaluation datasets that measure the robustness of these models to minimal logical edits in rulebases and to standard logical equivalence conditions. In our experiments with RoBERTa and T5, we find that models trained in prior work do not perform consistently across the different perturbations in RobustLR, showing that they are not robust to the proposed logical perturbations. Further, the models find it especially hard to learn the logical negation and disjunction operators. Overall, our evaluation sets expose shortcomings of deductive-reasoning language models, which can ultimately help in designing better models for logical reasoning over natural language. All datasets and code have been made publicly available.
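As an example of the kind of minimal logical edit such a suite can probe, the sketch below rewrites a templated "if premise then conclusion" rule into its contrapositive, one standard logical equivalence condition. The rule representation is a simplification for illustration, not the dataset's actual schema.

```python
def negate(fact: str) -> str:
    # Toggle a simple textual negation marker.
    return fact[len("not "):] if fact.startswith("not ") else f"not {fact}"

def contrapositive(rule):
    """If a rulebase entails 'if p then q', it should also entail 'if not q then not p'."""
    premise, conclusion = rule
    return (negate(conclusion), negate(premise))

rule = ("the dog is big", "the dog is heavy")   # "If the dog is big, then the dog is heavy."
print(contrapositive(rule))                     # ('not the dog is heavy', 'not the dog is big')
```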
Augmenting pre-trained language models with knowledge graphs (KGs) has achieved success on various commonsense reasoning tasks. However, for a given task instance, the KG, or certain parts of the KG, may not be useful. Although KG-augmented models often use attention to focus on specific KG components, the KG is still always used, and the attention mechanism is never explicitly taught which components it should use. Meanwhile, saliency methods can measure how much a KG feature (e.g., graph, node, path) influences the model toward making the correct prediction, thus explaining which KG features are useful. This paper explores how saliency explanations can be used to improve the performance of KG-augmented models. First, we propose creating coarse (is the KG useful?) and fine (which nodes/paths in the KG are useful?) saliency explanations. Second, to motivate saliency-based supervision, we analyze oracle KG-augmented models, which directly use saliency explanations as extra inputs to guide their attention. Third, we propose SalKG, a framework for KG-augmented models to learn from coarse and/or fine saliency explanations. Given saliency explanations created from a task's training set, SalKG jointly trains the model to predict the explanations and then solve the task by attending to the KG features highlighted by the predicted explanations. Across three commonsense QA benchmarks (CSQA, OBQA, CODAH) and a range of KG-augmented models, we show that SalKG can yield considerable performance gains, up to a 2.76% absolute improvement on CSQA.
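A sketch of how a coarse saliency label ("is the KG useful for this instance?") could be derived, following the description above: compare the model's confidence in the gold answer with and without the KG. The `predict_proba` interface and the thresholding rule are hypothetical stand-ins, not SalKG's implementation.

```python
def coarse_saliency_label(model, instance, threshold=0.0):
    """Return 1 if attending to the KG raises confidence in the gold answer, else 0."""
    # `predict_proba(instance, use_kg=...)` is an assumed interface of a
    # KG-augmented QA model that can be run with or without its KG component.
    p_with_kg = model.predict_proba(instance, use_kg=True)[instance.gold_answer]
    p_without_kg = model.predict_proba(instance, use_kg=False)[instance.gold_answer]
    return int(p_with_kg - p_without_kg > threshold)
```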
Neural networks are known to be over-confident when the output label distribution is used directly to produce uncertainty measures. Existing methods mainly resolve this issue by retraining the entire model to impose uncertainty quantification capability, so that the learned model achieves the desired accuracy and uncertainty prediction simultaneously. However, training the model from scratch is computationally expensive and may not be feasible in many situations. In this work, we consider a more practical post-hoc uncertainty learning setting, where a well-trained base model is given and we focus on the uncertainty quantification task in a second stage of training. We propose a novel Bayesian meta-model to augment pre-trained models with better uncertainty quantification abilities, which is both effective and computationally efficient. Our proposed method requires no additional training data and is flexible enough to quantify different uncertainties and easily adapt to different application settings, including out-of-domain data detection, misclassification detection, and trustworthy transfer learning. We demonstrate the flexibility and the superior empirical performance of our meta-model approach on these applications over multiple representative image classification benchmarks.
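A minimal sketch of the post-hoc setting in PyTorch: the base classifier stays frozen, and a small meta-model is trained on its intermediate features to flag likely errors as an uncertainty signal. The architecture, loss, and the `features`/`classifier` hooks on the base model are illustrative assumptions, not the paper's Bayesian meta-model.

```python
import torch
import torch.nn as nn

class UncertaintyMetaModel(nn.Module):
    """Small head that maps frozen base-model features to a scalar uncertainty score."""
    def __init__(self, feature_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, features):
        return torch.sigmoid(self.head(features)).squeeze(-1)

def train_step(base_model, meta_model, optimizer, x, y):
    base_model.eval()
    with torch.no_grad():                      # base model stays frozen
        feats = base_model.features(x)         # assumed feature-extraction hook
        preds = base_model.classifier(feats).argmax(dim=-1)
    target = (preds != y).float()              # 1 where the base model errs
    loss = nn.functional.binary_cross_entropy(meta_model(feats), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```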
In consequential decision-making applications, mitigating unwanted biases in machine learning models that systematically disadvantage members of groups delineated by sensitive attributes, such as race and gender, is one key intervention in the pursuit of equity. Focusing on demographic parity and equality of opportunity, in this paper we propose an algorithm that improves the fairness of a pre-trained classifier by simply dropping carefully selected training data points. We select instances based on their influence on the fairness metric of interest, computed using an infinitesimal-jackknife-based approach. The dropping of training points is done in principle only: in practice, it does not require the model to be refit. Crucially, we find that such an intervention does not substantially reduce the predictive performance of the model but drastically improves the fairness metric. Through careful experiments, we evaluate the effectiveness of the proposed approach on diverse tasks and find that it consistently improves upon existing alternatives.
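The overall recipe could look like the sketch below: score each training point by its estimated influence on the fairness metric (here the demographic parity gap) and drop the most harmful ones. The `influence_on_fairness` call and the sign convention are placeholders; the infinitesimal-jackknife computation itself is not reproduced.

```python
import numpy as np

def demographic_parity_gap(y_pred, sensitive):
    """Largest difference in positive-prediction rates across sensitive groups."""
    rates = [y_pred[sensitive == g].mean() for g in np.unique(sensitive)]
    return max(rates) - min(rates)

def select_points_to_drop(influences, k):
    """Drop the k points whose estimated influence most worsens the fairness gap."""
    return np.argsort(influences)[-k:]   # sign convention is an assumption

# Hypothetical usage (influence_on_fairness would implement the jackknife step):
# influences = influence_on_fairness(model, X_train, y_train, sensitive_train)
# drop_idx = select_points_to_drop(influences, k=100)
```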
Test log-likelihood is commonly used to compare different models of the same data and different approximate inference algorithms for fitting the same probabilistic model. We present simple examples demonstrating how comparisons based on test log-likelihood can contradict comparisons according to other objectives. Specifically, our examples show that (i) conclusions about forecast accuracy based on test log-likelihood comparisons may not agree with conclusions based on other distributional quantities, such as means; and (ii) approximate Bayesian inference algorithms that attain higher test log-likelihoods need not also yield more accurate posterior approximations.
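A small numeric illustration of point (i), using made-up predictive distributions: a model with the worse mean estimate can still attain the higher test log-likelihood.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x_test = rng.normal(loc=0.0, scale=1.0, size=100_000)  # ground truth: N(0, 1)

# Model A: correct mean but badly overdispersed. Model B: biased mean, correct scale.
ll_a = norm.logpdf(x_test, loc=0.0, scale=10.0).mean()
ll_b = norm.logpdf(x_test, loc=0.5, scale=1.0).mean()

print(f"mean error A = 0.0, avg test log-likelihood A = {ll_a:.3f}")
print(f"mean error B = 0.5, avg test log-likelihood B = {ll_b:.3f}")  # B scores higher
```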
Video Question Answering methods focus on commonsense reasoning and visual cognition of objects or persons and their interactions over time. Current VideoQA approaches ignore the textual information present in the video. Instead, we argue that textual information is complementary to the action and provides essential contextualisation cues to the reasoning process. To this end, we propose a novel VideoQA task that requires reading and understanding the text in the video. To explore this direction, we focus on news videos and require QA systems to comprehend and answer questions about the topics presented by combining visual and textual cues in the video. We introduce the "NewsVideoQA" dataset, which comprises more than 8,600 QA pairs on 3,000+ news videos obtained from diverse news channels around the world. We demonstrate the limitations of current Scene Text VQA and VideoQA methods and propose ways to incorporate scene text information into VideoQA methods.
Medical image classification is one of the most critical problems in the field of image recognition. A major challenge in this area is the lack of labeled training data. In addition, the datasets are often class-imbalanced, as some conditions occur rarely. As a result, classification accuracy is typically low. Deep learning models in particular have shown promising results on image segmentation and classification problems, but they require very large datasets for training. There is therefore a need to generate more synthetic samples from the same distribution. Prior work has shown that feature generation is more effective and achieves better performance than the corresponding image generation. We apply this idea to the medical imaging domain. We use transfer learning to train a segmentation model on a small dataset with gold-standard class annotations. We extract the learned features and use them to generate synthetic features conditioned on class labels with an auxiliary classifier GAN (ACGAN). We test the quality of the generated features in a downstream classification task graded by severity. Experimental results show promise regarding the effectiveness of these generated features and their overall contribution to balancing the data and improving classification accuracy.
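A compact sketch of the feature-generation step: an ACGAN-style generator in PyTorch that maps noise plus a class label to a synthetic feature vector. The dimensions, the conditioning scheme, and the omitted discriminator/auxiliary classifier and training loop are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class FeatureGenerator(nn.Module):
    """Generate synthetic feature vectors conditioned on a class label."""
    def __init__(self, noise_dim=100, num_classes=4, feature_dim=512):
        super().__init__()
        self.label_emb = nn.Embedding(num_classes, noise_dim)
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 256),
            nn.ReLU(),
            nn.Linear(256, feature_dim),
        )

    def forward(self, noise, labels):
        # Condition the noise on the class label, then map it to a feature vector.
        return self.net(noise * self.label_emb(labels))

gen = FeatureGenerator()
z = torch.randn(8, 100)
labels = torch.randint(0, 4, (8,))       # e.g., severity grades
synthetic_features = gen(z, labels)       # shape (8, 512)
```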
Model Predictive Control (MPC) is a state-of-the-art (SOTA) control technique that requires iteratively solving hard constrained optimization problems. For uncertain dynamics, robust MPC based on analytical models imposes additional constraints, increasing the hardness of the problem. The problem is exacerbated in performance-critical applications, where more computation is needed in less time. Data-driven regression methods such as neural networks have been proposed in the past to approximate system dynamics. However, in the absence of symbolic analytical priors, such models rely on large amounts of labeled data, incurring non-trivial training overhead. Physics-informed neural networks (PINNs) have gained traction for approximating nonlinear systems of ordinary differential equations (ODEs) with reasonable accuracy. In this work, we propose Robust Adaptive MPC via PINNs (RAMP-Net), a framework that uses a neural network trained partly from simple ODEs and partly from data. A physics loss is used to learn simple ODEs representing the ideal dynamics; having access to analytical functions inside the loss function acts as a regularizer, enforcing robust behavior against parametric uncertainties. A regular data loss, on the other hand, is used to adapt to residual disturbances (non-parametric uncertainties) that are unaccounted for during mathematical modeling. Experiments are conducted in a simulated environment for trajectory tracking of a quadrotor. Compared to two SOTA regression-based MPC methods, we report 7.8% to 43.2% and 8.04% to 61.5% reductions in tracking error.
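The two-part loss described above can be sketched as follows, with a toy first-order ODE standing in for the ideal dynamics; the actual quadrotor dynamics, network architecture, and loss weights used by RAMP-Net are not reproduced here.

```python
import torch

def physics_loss(model, t, k=1.0):
    """Penalize the residual of a simple known ODE, here dx/dt = -k * x."""
    t = t.clone().requires_grad_(True)
    x = model(t)
    dx_dt = torch.autograd.grad(x, t, grad_outputs=torch.ones_like(x),
                                create_graph=True)[0]
    return ((dx_dt + k * x) ** 2).mean()

def total_loss(model, t_collocation, t_data, x_data, w_phys=1.0, w_data=1.0):
    # Data loss adapts to residual disturbances not captured by the simple ODE.
    data_loss = ((model(t_data) - x_data) ** 2).mean()
    return w_phys * physics_loss(model, t_collocation) + w_data * data_loss
```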